
    Predicting Text Quality: Metrics for Content, Organization and Reader Interest

    When people read articles---news, fiction or technical---most of the time, if not always, they form perceptions about their quality. Some articles are well written and others are poorly written. This thesis explores whether such judgements can be automated so that they can be incorporated into applications such as information retrieval and automatic summarization. Text quality does not involve a single aspect but is a combination of numerous and diverse criteria, including spelling, grammar, organization, informativeness, creative and beautiful language use, and page layout. In the education domain, comprehensive lists of such properties are outlined in the rubrics used for assessing writing. But computational methods for text quality have addressed only a handful of these aspects, mainly related to spelling, grammar and organization. In addition, some text quality aspects may be more relevant for one genre than for another, but previous work has placed little focus on specialized metrics based on the genre of texts. This thesis proposes new insights and techniques to address the above issues. We introduce metrics that score varied dimensions of quality such as content, organization and reader interest. For content, we present two measures: specificity and verbosity level. Specificity measures the amount of detail present in a text, while verbosity captures which details are essential to include. We measure organization quality by quantifying the regularity of the intentional structure in the article and also by using the specificity levels of adjacent sentences in the text. Our reader interest metrics aim to identify engaging and interesting articles. The development of these measures is backed by the use of articles from three different genres: academic writing, science journalism and automatically generated summaries. Proper presentation of content is critical during summarization because summaries have a word limit. Our specificity and verbosity metrics are developed with this genre as the focus. The argumentation structure of academic writing lends support to the idea of using intentional structure to model organization quality. Science journalism articles convey research findings in an engaging manner and are ideally suited for the development and evaluation of measures related to reader interest.

    A Bayesian Method to Incorporate Background Knowledge during Automatic Text Summarization

    In order to summarize a document, it is often useful to have a background set of documents from the domain to serve as a reference for determining new and important information in the input document. We present a model based on Bayesian surprise which provides an intuitive way to identify surprising information in a summarization input with respect to a background corpus. Specifically, the method quantifies the degree to which pieces of information in the input change one's beliefs about the world represented in the background. We develop systems for generic and update summarization based on this idea. Our method provides competitive content selection performance, with particular advantages in the update task, where systems are given a small and topical background corpus.
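    The Bayesian-surprise idea above can be sketched as the KL divergence between a Dirichlet belief over the vocabulary before and after observing a sentence's words. The add-one smoothing, the numerical digamma, and all function names below are illustrative assumptions, not the paper's implementation:

    ```python
    from collections import Counter
    from math import lgamma

    def digamma(x, h=1e-5):
        # numerical digamma via central difference of log-gamma (stdlib only)
        return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

    def dirichlet_kl(post, prior):
        """Closed-form KL(Dir(post) || Dir(prior))."""
        sp, sq = sum(post), sum(prior)
        kl = lgamma(sp) - lgamma(sq)
        kl -= sum(lgamma(a) - lgamma(b) for a, b in zip(post, prior))
        kl += sum((a - b) * (digamma(a) - digamma(sp))
                  for a, b in zip(post, prior))
        return kl

    def surprise(sentence_tokens, background_counts, vocab):
        """Bayesian surprise of a sentence w.r.t. a background corpus:
        how far observing its words moves the Dirichlet belief."""
        prior = [1.0 + background_counts.get(w, 0) for w in vocab]
        obs = Counter(sentence_tokens)
        post = [p + obs.get(w, 0) for w, p in zip(vocab, prior)]
        return dirichlet_kl(post, prior)
    ```

    A sentence repeating words that are frequent in the background barely shifts the belief and scores near zero, while one introducing words unseen in the background scores high, matching the intuition of "new and important information".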

    Creating Local Coherence: An Empirical Assessment

    Two of the mechanisms for creating natural transitions between adjacent sentences in a text, and thus local coherence, are discourse relations and switches of the focus of attention between discourse entities. These two aspects of local coherence have traditionally been discussed and studied separately, but some empirical studies have given strong evidence for the necessity of understanding how the two types of coherence-creating devices interact. Here we present a joint corpus study of discourse relations and entity coherence exhibited in news texts from the Wall Street Journal and test several hypotheses expressed in earlier work about their interaction.

    General Versus Specific Sentences: Automatic Identification and Application to Analysis of News Summaries

    In this paper, we introduce the task of identifying general and specific sentences in news articles. Instead of embarking on a new annotation effort to obtain data for the task, we explore the possibility of leveraging existing large corpora annotated with discourse information to train a classifier. We introduce several classes of features that capture lexical and syntactic information, as well as word specificity and polarity. We then use the classifier to analyze the distribution of general and specific sentences in human and machine summaries of news articles. We discover that while all types of summaries tend to be more specific than the original documents, human abstracts contain a more balanced mix of general and specific sentences, whereas automatic summaries are overwhelmingly specific. Our findings give strong evidence for the need for a new task in (abstractive) summarization: identification and generation of general sentences.
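    The kind of surface cues such a classifier can exploit might be sketched as follows. The three features and the hand-set weights below are illustrative placeholders; the paper trains a classifier on discourse-annotated corpora with much richer lexical, syntactic, specificity and polarity features:

    ```python
    import re

    def specificity_features(sentence):
        """Surface cues that tend to mark specific sentences: length,
        capitalised non-initial tokens (a crude named-entity proxy),
        and tokens containing digits (dates, quantities, prices)."""
        toks = sentence.split()
        return [
            len(toks),
            sum(t[:1].isupper() for t in toks[1:]),
            sum(bool(re.search(r"\d", t)) for t in toks),
        ]

    # placeholder weights; a real system would learn these from labeled data
    WEIGHTS = [0.1, 0.5, 1.0]

    def specificity_score(sentence):
        """Higher score -> more specific; a threshold yields the binary label."""
        return sum(w * f for w, f in zip(WEIGHTS, specificity_features(sentence)))
    ```

    A sentence naming people, dates and amounts scores well above a generic claim, which is the separation the binary general/specific label needs.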

    A corpus of science journalism for analyzing writing quality

    We introduce a corpus of science journalism articles, categorized into three levels of writing quality. The corpus fulfills a glaring need for realistic data on which applications concerned with predicting text quality can be developed and evaluated. In this article we describe how we identified, guided by the judgements of renowned writers, samples of extraordinarily well-written pieces, and how these were expanded into a larger set of typical journalistic writing. We provide details about the corpus and the text quality evaluations it can support. Our intention is to further extend the corpus with annotations of phenomena that reveal quantifiable differences between levels of writing quality. Here we introduce two of the many types of sentence-level annotation that distinguish amazing from typical writing: text generality/specificity and communicative goal. We explore the feasibility of acquiring annotations automatically, and verify that such features are indeed predictive of writing quality. We find that general/specific annotation at the sentence level can be performed reasonably accurately fully automatically, while automatic annotation of communicative goal reveals salient characteristics of journalistic writing but does not align with the categories we wish to annotate in future work.

    Knowledge management and history

    Capitalising the history of a technology, a technique or a concept within an industrial company is relevant to historians, but from a Knowledge Management point of view it goes well beyond purely historical questions. In this context it can be the subject of specific approaches, especially Knowledge Engineering. However, it faces two types of difficulty: historians of technology have few modelling tools, and are even rather reluctant to use such tools; and Knowledge Engineering rarely addresses the modelling of historical knowledge, i.e. tracing how knowledge evolves. It is nevertheless possible to develop robust and validated methods, tools and techniques that take both approaches into account and that, working in synergy, prove rich and fertile.
    Keywords: History, MASK, Knowledge management, Knowledge engineering, History of techniques

    Verbose, Laconic or Just Right: A Simple Computational Model of Content Appropriateness under Length Constraints

    Length constraints impose implicit requirements on the type of content that can be included in a text. Here we propose the first model to computationally assess whether a text deviates from these requirements. Specifically, our model predicts the appropriate length for a text based on the content types present in a snippet of constant length. We consider a range of features to approximate content type, including syntactic phrasing, constituent compression probability, presence of named entities, sentence specificity and intersentence continuity. Weights for these features are learned using a corpus of expert-written summaries and high-quality journalistic writing. At test time, the difference between actual and predicted length allows us to quantify text verbosity. We use data from manual evaluation of summarization systems to assess the verbosity scores produced by our model. We show that the automatic verbosity scores are significantly negatively correlated with the manual content quality scores given to the summaries.
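    The actual-minus-predicted-length idea above can be sketched with a single content feature. Reducing the model to one feature (say, named-entity density of the fixed-length snippet) and fitting it by simple least squares is an illustrative assumption; the paper learns weights over many content-type features:

    ```python
    def fit_line(xs, ys):
        """Ordinary least squares for one feature: length ~ slope*x + intercept."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                 / sum((x - mx) ** 2 for x in xs))
        return slope, my - slope * mx

    def verbosity(actual_len, feature, slope, intercept):
        """Positive -> verbose (longer than its content warrants),
        negative -> laconic, near zero -> just right."""
        predicted = slope * feature + intercept
        return actual_len - predicted
    ```

    A text much longer than the length its content type predicts gets a positive (verbose) score, which is the quantity the paper correlates negatively with manual content quality.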

    Which Step Do I Take First? Troubleshooting with Bayesian Models

    Online discussion forums and community question-answering websites provide one of the primary avenues for online users to share information. In this paper, we propose text mining techniques which help users navigate troubleshooting-oriented data such as questions asked on forums and their suggested solutions. We introduce Bayesian generative models of the troubleshooting data and apply them to two interrelated tasks: (a) predicting the complexity of the solutions (e.g., plugging in a keyboard is easier than installing a special driver) and (b) presenting them in ranked order from least to most complex. Experimental results show that our models are on par with human performance on these tasks, while outperforming baselines based on solution length or readability.